Salary: ₹40–₹65 Lakhs per Annum (expected)
Description:
Morgan Stanley is seeking a hands-on Director-level Data Engineer to join its Firmwide Data Office (CDRR Technology) within the Data & Analytics Engineering function. This role is designed for engineers who are passionate about solving complex business problems through strong data engineering practices, innovation, and scalable architecture.
You will play a key role in building and optimizing enterprise-grade data pipelines, designing robust data models, and enabling high-quality analytics and insights across the firm. The position combines deep technical ownership with strategic collaboration, working closely with application teams and business stakeholders to ensure data accuracy, accessibility, and performance.
This role also provides exposure to advanced analytics, AI/ML-driven techniques, and large-scale data platforms, making it ideal for senior data engineers who want to influence firmwide data strategy while remaining technically hands-on.
What you’ll work on:
- Designing and building robust data ingestion pipelines using Python, PySpark, and Snowflake
- Developing scalable ETL workflows using Airflow / Autosys
- Creating and maintaining enterprise and dimensional data models
- Optimizing data pipelines for performance, reliability, and data quality
- Automating data cleansing, transformation, and reformatting using Python
- Integrating data from multiple sources such as DB2, SQL Server, and flat files
- Partnering with application teams on database design, development, and support
- Applying DevOps practices in the data engineering space
- Supporting analytics, reporting, and AI/ML-driven use cases
- Participating across the full SDLC with a strong focus on delivery and quality
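To give a flavor of the "automating data cleansing, transformation, and reformatting using Python" responsibility above, here is a minimal, self-contained sketch. The field names and cleansing rules are hypothetical illustrations, not Morgan Stanley's actual schema or standards:

```python
def clean_rows(raw_rows):
    """Normalize raw records: strip whitespace, standardize name casing,
    drop rows missing required fields, and reformat amount strings to floats."""
    cleaned = []
    for row in raw_rows:
        name = row.get("name", "").strip()
        amount = row.get("amount", "").strip().replace(",", "")
        if not name or not amount:
            continue  # drop incomplete records
        cleaned.append({"name": name.title(), "amount": float(amount)})
    return cleaned

# Hypothetical raw feed, e.g. parsed from a flat file
raw = [
    {"name": "  acme corp ", "amount": "1,250.50"},
    {"name": "", "amount": "99"},            # dropped: missing name
    {"name": "globex", "amount": " 300 "},
]
print(clean_rows(raw))
```

In a production pipeline the same kind of function would typically run as a PySpark transformation or an Airflow task rather than a bare script.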
Key Technical Skills:
Python, PySpark, Snowflake, SQL, Airflow, Autosys, Data Engineering, ETL, Data Modeling (OLTP & Dimensional), Unix/Linux, DevOps for Data, Distributed Processing
Requirements:
- 4+ years of strong hands-on experience building data ingestion and ETL pipelines
- Deep expertise in Python for automation, data cleansing, and transformations
- Strong experience with PySpark and distributed data processing
- Hands-on experience with Snowflake and cloud-based data platforms
- Solid understanding of relational databases, complex SQL, stored procedures, and performance tuning
- Strong data modeling skills including Enterprise Data Models and Dimensional Models
- Experience designing and developing complex ETL mappings and transformations
- Experience integrating multiple data sources into staging and analytics layers
- Strong problem-solving skills and business understanding
- Excellent written and verbal communication skills with both technical and non-technical stakeholders
- Proficiency working in Unix/Linux environments
- Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Data Analytics, or related field
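The "integrating multiple data sources into staging and analytics layers" requirement can be sketched in plain Python. This is an illustrative toy (the `account_id` key and source names are assumptions), showing the common pattern of merging records from heterogeneous systems into one staging structure while tracking lineage:

```python
def merge_to_staging(db2_rows, sqlserver_rows):
    """Combine records from two source systems into a single staging
    layer keyed by account id, tagging each record with its sources."""
    staging = {}
    for source, rows in (("db2", db2_rows), ("sqlserver", sqlserver_rows)):
        for row in rows:
            key = row["account_id"]
            staged = staging.setdefault(key, {"account_id": key, "sources": []})
            # Later sources overwrite earlier values for overlapping columns
            staged.update({k: v for k, v in row.items() if k != "account_id"})
            staged["sources"].append(source)
    return list(staging.values())

# Hypothetical extracts from each source system
db2 = [{"account_id": 1, "balance": 100}]
sqlserver = [{"account_id": 1, "region": "EMEA"},
             {"account_id": 2, "region": "APAC"}]
print(merge_to_staging(db2, sqlserver))
```

At enterprise scale the same join-and-conform step would be expressed as PySpark joins landing into Snowflake staging tables, but the conceptual shape is the same.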
Important Notice:
This job description and related content are owned by Morgan Stanley. We are sharing this information only to help job seekers find opportunities. For application procedures, status, or any related concerns, please contact Morgan Stanley directly. We do not process applications or respond to candidate queries.